
    Mental Task Recognition by EEG Signals: A Novel Approach with ROC Analysis

    Electroencephalography (EEG) has been widely used in medical fields and, more recently, in cognitive science and brain-computer interface (BCI) research. To distinguish mental tasks such as reading, calculation, and motor imagery, features of EEG signals are generally extracted by dimensionality reduction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and common spatial patterns (CSP), and then fed to classifiers such as the k-nearest neighbor method (kNN), kernel support vector machines (SVM), and artificial neural networks (ANN). In this chapter, a novel approach to feature extraction of EEG signals with receiver operating characteristic (ROC) analysis is introduced.
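The idea of ROC-based feature screening can be illustrated with a minimal sketch: rank a scalar feature by how well it separates two mental-task classes, measured by the area under the ROC curve (AUC). This is a generic illustration of ROC analysis, not the chapter's actual method or data; the feature values below are made up.

```python
# Sketch of ROC-based feature screening for two-class EEG trials
# (hypothetical example values, not the chapter's dataset).

def roc_auc(scores_pos, scores_neg):
    """AUC via pairwise comparison: P(score_pos > score_neg),
    counting ties as 0.5. Equivalent to the area under the ROC curve."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# One scalar feature (e.g. band power of one channel) per trial.
feature_task_a = [0.9, 0.8, 0.75, 0.6]   # trials of mental task A
feature_task_b = [0.4, 0.3, 0.55, 0.2]   # trials of mental task B

auc = roc_auc(feature_task_a, feature_task_b)
print(auc)  # 1.0 = perfect separation; 0.5 = chance level
```

Features whose AUC is close to 1.0 (or 0.0, after flipping the sign) discriminate the two tasks well and are candidates to keep; features near 0.5 carry little class information.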

    Training Deep Neural Networks with Reinforcement Learning for Time Series Forecasting

    As efficient nonlinear function approximators, artificial neural networks (ANN) have been popularly applied to time series forecasting. ANN training usually relies on error back-propagation (BP), a supervised learning algorithm proposed by Rumelhart et al. in 1986. To improve the robustness of ANN for unknown time series prediction, the authors proposed using a reinforcement learning algorithm named stochastic gradient ascent (SGA), originally proposed by Kimura and Kobayashi for control problems in 1998. In 2012, we also successfully used a deep belief net (DBN) stacked from multiple restricted Boltzmann machines (RBMs) to realize time series forecasting. In this chapter, a state-of-the-art time series forecasting system that combines RBMs and a multilayer perceptron (MLP) and uses the SGA training algorithm is introduced. Experimental results showed the high prediction precision of the novel system not only for benchmark data but also for real phenomenon time series data.
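The flavor of an SGA-style update can be sketched with a Gaussian prediction policy and a REINFORCE-like rule: the policy mean is a linear readout of the input window, and the weights move in the direction of the characteristic eligibility scaled by the reward. This is a deliberately simplified illustration in the spirit of Kimura and Kobayashi's rule, not the chapter's RBM+MLP system; all names and values are illustrative.

```python
# Minimal sketch of one stochastic-gradient-ascent (SGA) update for a
# Gaussian prediction policy (illustrative, not the chapter's model).

SIGMA = 0.5  # fixed exploration width of the Gaussian policy

def predict_mean(w, x):
    """Policy mean: a linear readout over the input window."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sga_update(w, x, action, reward, alpha=0.1):
    """w <- w + alpha * reward * d(log pi)/dw for a Gaussian policy.

    For pi(a|x) = N(mu, sigma^2) with mu = w.x, the characteristic
    eligibility of the mean parameters is (a - mu) / sigma^2 * x.
    """
    mu = predict_mean(w, x)
    elig = [(action - mu) / (SIGMA ** 2) * xi for xi in x]
    return [wi + alpha * reward * ei for wi, ei in zip(w, elig)]

w = [0.0, 0.0]          # weights over a 2-step input window
x = [1.0, 0.5]          # recent observations
a = 0.8                 # sampled prediction (fixed here for clarity)
r = 1.0                 # positive reward: the prediction scored well

w_new = sga_update(w, x, a, r)
print(predict_mean(w_new, x) > predict_mean(w, x))  # True: mean shifts toward a
```

With a positive reward the mean moves toward the sampled prediction; with a negative reward it moves away, which is the core reinforcement signal that replaces the supervised BP gradient.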

    Investigating the influence of PFC transection and nicotine on dynamics of AMPA and NMDA receptors of VTA dopaminergic neurons

    Background: All drugs of abuse, including nicotine, activate the mesocorticolimbic system, which plays critical roles in nicotine reward and the development of reinforcement and triggers glutamatergic synaptic plasticity on the dopamine (DA) neurons in the ventral tegmental area (VTA). The addictive behavior and firing pattern of VTA DA neurons are thought to be controlled by glutamatergic synaptic input from the prefrontal cortex (PFC). Interrupting the functional input from PFC to VTA was shown to decrease the effects of the drug on the addiction process. Nicotine treatment can enhance the AMPA/NMDA ratio in VTA DA neurons, which is thought to be a common addiction mechanism. In this study, we investigate whether the lack of glutamate transmission from PFC to VTA changes the effects of nicotine. Methods: We used the traditional AMPA/NMDA peak ratio, the AMPA/NMDA area ratio, and Kullback-Leibler (KL) divergence analysis. Results: The AMPA/NMDA peak ratio showed no significant difference between PFC-intact and PFC-transected animals treated with saline. However, using the AMPA/NMDA area ratio and the KL divergence method, we observed a significant difference when the PFC input was interrupted under saline treatment. One possible reason for the significant effect of PFC transection on the synaptic responses (as indicated by the AMPA/NMDA area ratio and KL divergence) is the loss of glutamatergic inputs, one of the most important factors contributing to the peak ratio level. Conclusions: Our results suggest that even within one hour after a single nicotine injection, the AMPA/NMDA peak ratio on VTA DA neurons can be enhanced.
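The KL divergence analysis mentioned above compares response distributions between conditions. A minimal sketch of the underlying computation on discrete, normalized histograms is shown below; the histogram values are made up for illustration and are not the study's data.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL(P||Q) = sum_i p_i * log(p_i / q_i).
    eps guards against log(0); p and q must each sum to 1."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Illustrative normalized amplitude histograms of evoked responses
# from two conditions (made-up values, not the study's measurements).
p_intact     = [0.1, 0.4, 0.3, 0.2]
q_transected = [0.25, 0.25, 0.25, 0.25]

print(kl_divergence(p_intact, q_transected))   # positive: distributions differ
print(kl_divergence(p_intact, p_intact))       # 0.0 for identical distributions
```

Unlike a single peak ratio, the divergence is sensitive to differences anywhere in the distribution, which is consistent with the abstract's finding that the area ratio and KL measures detected an effect the peak ratio missed.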

    Parameterless-Growing-SOM and Its Application to a Voice Instruction Learning System

    An improved self-organizing map (SOM), the parameterless-growing-SOM (PL-G-SOM), is proposed in this paper. To overcome problems of the traditional SOM (Kohonen, 1982), various structure-growing SOMs and parameter-adjusting SOMs have been invented, though usually separately. Here, we combine the idea of growing SOMs (Bauer and Villmann, 1997; Dittenbach et al., 2000) with a parameterless SOM (Berglund and Sitte, 2006) into a novel SOM named PL-G-SOM, realizing additional learning, optimal neighborhood preservation, and automatic tuning of parameters. The improved SOM is applied to construct a voice instruction learning system for partner robots adopting a simple reinforcement learning algorithm. Users' voice instructions are first classified by the PL-G-SOM; the robot then chooses an expected action according to a stochastic policy. The policy is adjusted by the reward or punishment given by the robot's user. A feeling map is also designed to express the learning degrees of voice instructions. Learning and additional-learning experiments using instructions in multiple languages, including Japanese, English, Chinese, and Malay, confirmed the effectiveness of the proposed system.
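The parameterless side of the scheme can be sketched as follows: instead of a hand-tuned learning-rate schedule, the update strength is driven by the current quantization error, normalized by the largest error seen so far, in the spirit of Berglund and Sitte (2006). This is a simplified 1-D toy, not the paper's PL-G-SOM (in particular, it omits the map-growing mechanism); all values are illustrative.

```python
import math

# Simplified sketch of a parameterless SOM update: learning rate and
# neighborhood width scale with the normalized quantization error
# rather than a decaying schedule (1-D map, toy values).

def sq_dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def plsom_step(weights, x, state, base_width=1.0):
    """One update: BMU search, error-normalized rate, Gaussian neighborhood."""
    bmu = min(range(len(weights)), key=lambda i: sq_dist(weights[i], x))
    err = sq_dist(weights[bmu], x)
    state["max_err"] = max(state["max_err"], err)
    eps = err / state["max_err"] if state["max_err"] > 0 else 0.0
    for i, w in enumerate(weights):
        h = eps * math.exp(-((i - bmu) ** 2) / (2 * (base_width * eps + 1e-9) ** 2))
        weights[i] = [wi + h * (xi - wi) for wi, xi in zip(w, x)]
    return bmu

weights = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]  # 3-node 1-D map
state = {"max_err": 0.0}
bmu = plsom_step(weights, [0.2, 0.1], state)
print(bmu)  # node 0 is closest to this input
```

When the map already fits the input well, eps is small and the update barely perturbs the weights; a surprising input produces a large eps and a correspondingly strong, wide update, which is what makes additional learning possible without re-tuning parameters.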